Data-Efficient Reinforcement Learning with Probabilistic Model Predictive Control
Trial-and-error based reinforcement learning (RL) has seen rapid advancements
in recent times, especially with the advent of deep neural networks. However,
the majority of autonomous RL algorithms require a large number of interactions
with the environment. Such extensive interaction may be impractical in many
real-world applications, such as robotics, and many practical systems also have
to obey limitations in the form of state space or control constraints. To reduce
the number of system interactions while simultaneously handling constraints, we
propose a model-based RL framework based on probabilistic Model Predictive
Control (MPC). In particular, we propose to learn a probabilistic transition
model using Gaussian Processes (GPs) to incorporate model uncertainty into
long-term predictions, thereby reducing the impact of model errors. We then
use MPC to find a control sequence that minimises the expected long-term cost.
We provide theoretical guarantees for first-order optimality in the GP-based
transition models with deterministic approximate inference for long-term
planning. We demonstrate that our approach not only achieves
state-of-the-art data efficiency, but is also a principled way to perform RL in
constrained environments.
Comment: Accepted at AISTATS 2018
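As a rough illustration of the control loop this abstract describes, the sketch below fits a GP transition model to logged transitions and chooses actions by model-predictive control. It is a minimal sketch, not the paper's algorithm: it propagates only the GP posterior mean (the paper propagates full predictive uncertainty via deterministic approximate inference) and uses random shooting rather than gradient-based trajectory optimisation; cost_fn, the action bounds, and all sizes are illustrative assumptions.

```python
# Minimal GP-MPC sketch: mean-only rollouts, random-shooting optimisation.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def fit_transition_model(states, actions, next_states):
    """Fit a GP mapping (state, action) -> next state from logged transitions."""
    X = np.hstack([states, actions])
    kernel = RBF(length_scale=1.0) + WhiteKernel(noise_level=1e-3)
    gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
    gp.fit(X, next_states)
    return gp

def mpc_action(gp, state, cost_fn, horizon=10, n_candidates=100,
               action_dim=1, rng=None):
    """Return the first action of the lowest-cost sampled action sequence."""
    rng = rng or np.random.default_rng(0)
    best_cost, best_action = np.inf, None
    for _ in range(n_candidates):
        seq = rng.uniform(-1.0, 1.0, size=(horizon, action_dim))
        s, total = state, 0.0
        for a in seq:
            # Roll the model forward on its posterior mean (a simplification).
            s = gp.predict(np.hstack([s, a]).reshape(1, -1))[0]
            total += cost_fn(s)
        if total < best_cost:
            best_cost, best_action = total, seq[0]
    return best_action
```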
Distributed Gaussian Processes
To scale Gaussian processes (GPs) to large data sets we introduce the robust
Bayesian Committee Machine (rBCM), a practical and scalable product-of-experts
model for large-scale distributed GP regression. Unlike state-of-the-art sparse
GP approximations, the rBCM is conceptually simple and does not rely on
inducing or variational parameters. The key idea is to recursively distribute
computations to independent computational units and, subsequently, recombine
them to form an overall result. Efficient closed-form inference allows for
straightforward parallelisation and distributed computations with a small
memory footprint. The rBCM is independent of the computational graph and can be
used on heterogeneous computing infrastructures, ranging from laptops to
clusters. With sufficient computing resources our distributed GP model can
handle arbitrarily large data sets.
Comment: 10 pages, 5 figures. Appears in Proceedings of ICML 201
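The product-of-experts recombination can be made concrete with the BCM-style formulas commonly associated with the rBCM. The sketch below assumes each expert is a GP trained on its own data shard and returns a Gaussian predictive mean and variance at a test input; the weights use the differential-entropy heuristic, and prior_var denotes the GP prior variance at that input. Treat it as a sketch of the combination rule, not a verbatim transcription of the paper.

```python
# rBCM-style recombination of independent GP experts into one prediction.
import numpy as np

def rbcm_combine(means, variances, prior_var):
    means = np.asarray(means)
    variances = np.asarray(variances)
    # Weight each expert by how much it reduces the prior uncertainty.
    beta = 0.5 * (np.log(prior_var) - np.log(variances))
    # Combined precision: weighted expert precisions plus a prior-correction
    # term that keeps the model calibrated when experts are uninformative.
    precision = np.sum(beta / variances) + (1.0 - np.sum(beta)) / prior_var
    var = 1.0 / precision
    mean = var * np.sum(beta * means / variances)
    return mean, var
```

Because each expert only needs to report two numbers, the recombination is trivially parallel and has the small memory footprint the abstract emphasises.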
Differentially Private Empirical Risk Minimization with Sparsity-Inducing Norms
Differential privacy is concerned with preserving prediction quality while
quantifying the privacy impact on individuals whose information is contained in
the data. We consider differentially private risk minimization problems with
regularizers that induce structured sparsity. These regularizers are known to
be convex but they are often non-differentiable. We analyze the standard
differentially private algorithms, such as output perturbation, Frank-Wolfe and
objective perturbation. Output perturbation is a differentially private
algorithm that is known to perform well for minimizing risks that are strongly
convex. Previous works have derived excess risk bounds that are independent of
the dimensionality. In this paper, we assume a particular class of convex but
non-smooth regularizers that induce structured sparsity and loss functions for
generalized linear models. We also consider differentially private Frank-Wolfe
algorithms to optimize the dual of the risk minimization problem. We derive
excess risk bounds for both these algorithms. Both bounds depend on the
Gaussian width of the unit ball of the dual norm. We also show that objective
perturbation of the risk minimization problems is equivalent to the output
perturbation of a dual optimization problem. This is the first work to analyze
the dual optimization problems of risk minimization in the context of
differential privacy.
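To make output perturbation concrete, the sketch below releases a noisy version of the exact minimizer under the standard assumptions for this mechanism, which are not specific to this paper: an L-Lipschitz loss and a lam-strongly convex objective give the minimizer an L2 sensitivity of 2*L/(n*lam), and Gaussian noise calibrated to that sensitivity yields (epsilon, delta)-differential privacy.

```python
# Output perturbation for DP empirical risk minimization (standard recipe).
import numpy as np

def private_erm_solution(theta_hat, n, L, lam, epsilon, delta, rng=None):
    """Release a noisy copy of the non-private minimizer theta_hat."""
    rng = rng or np.random.default_rng(0)
    # L2 sensitivity of the minimizer of an L-Lipschitz, lam-strongly
    # convex objective over n examples.
    sensitivity = 2.0 * L / (n * lam)
    # Gaussian-mechanism noise scale for (epsilon, delta)-DP.
    sigma = sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    return theta_hat + rng.normal(0.0, sigma, size=theta_hat.shape)
```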
Meta Reinforcement Learning with Latent Variable Gaussian Processes
Learning from small data sets is critical in many practical applications
where data collection is time consuming or expensive, e.g., robotics, animal
experiments or drug design. Meta learning is one way to increase the data
efficiency of learning algorithms by generalizing learned concepts from a set
of training tasks to unseen, but related, tasks. Often, this relationship
between tasks is hard-coded or otherwise relies on human expertise. In
this paper, we frame meta learning as a hierarchical latent variable model and
infer the relationship between tasks automatically from data. We apply our
framework in a model-based reinforcement learning setting and show that our
meta-learning model effectively generalizes to novel tasks by identifying how
new tasks relate to prior ones from minimal data. This results in up to a 60%
reduction in the average interaction time needed to solve tasks compared to
strong baselines.
Comment: 11 pages, 7 figures
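One way to picture the hierarchical latent variable model is the loose sketch below: a single GP is shared across tasks, each task contributes a latent vector appended to its inputs, and a new task is identified from minimal data by fitting only its latent vector. This is a conceptual sketch, not the paper's inference procedure; in particular, the grid search over candidate latents stands in for proper posterior inference over the task variable.

```python
# Shared GP over (input, task-latent) pairs; new tasks fit only their latent.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def fit_shared_gp(task_inputs, task_targets, task_latents):
    """task_inputs[t]: (n_t, d) array; task_latents[t]: (h,) latent vector."""
    X = np.vstack([np.hstack([x, np.tile(h, (len(x), 1))])
                   for x, h in zip(task_inputs, task_latents)])
    y = np.concatenate(task_targets)
    gp = GaussianProcessRegressor(RBF() + WhiteKernel(), normalize_y=True)
    return gp.fit(X, y)

def infer_task_latent(gp, x_new, y_new, candidates):
    """Score candidate latents by predictive log-likelihood on the new task."""
    def loglik(h):
        Xh = np.hstack([x_new, np.tile(h, (len(x_new), 1))])
        mu, sd = gp.predict(Xh, return_std=True)
        return -0.5 * np.sum(((y_new - mu) / sd) ** 2 + np.log(sd ** 2))
    return max(candidates, key=loglik)
```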
Learning deep dynamical models from image pixels
Modeling dynamical systems is important in many disciplines, e.g., control,
robotics, or neurotechnology. Commonly the state of these systems is not
directly observed, but only available through noisy and potentially
high-dimensional observations. In these cases, system identification, i.e.,
finding the measurement mapping and the transition mapping (system dynamics) in
latent space can be challenging. For linear system dynamics and measurement
mappings efficient solutions for system identification are available. However,
in practical applications, the linearity assumption does not hold, requiring
non-linear system identification techniques. If additionally the observations
are high-dimensional (e.g., images), non-linear system identification is
inherently hard. To address the problem of non-linear system identification
from high-dimensional observations, we combine recent advances in deep learning
and system identification. In particular, we jointly learn a low-dimensional
embedding of the observation by means of deep auto-encoders and a predictive
transition model in this low-dimensional space. We demonstrate that our model
enables learning good predictive models of dynamical systems from pixel
information only.
Comment: 10 pages, 11 figures
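A compact sketch of the jointly trained architecture this describes is given below, written in PyTorch for concreteness (the layer sizes, the linear transition model, and the equal loss weighting are illustrative assumptions, not the paper's exact configuration): an encoder maps images to a low-dimensional latent state, a decoder reconstructs them, and a transition network predicts the next latent from the current latent and control, with both losses minimised together.

```python
# Joint auto-encoder + latent transition model, trained on (img_t, u_t, img_t+1).
import torch
import torch.nn as nn

class DeepDynamicalModel(nn.Module):
    def __init__(self, n_pixels, latent_dim=3, control_dim=1):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_pixels, 128), nn.ReLU(),
                                     nn.Linear(128, latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(),
                                     nn.Linear(128, n_pixels))
        # Predicts z_{t+1} from the current latent state and control input.
        self.transition = nn.Linear(latent_dim + control_dim, latent_dim)

    def loss(self, img_t, u_t, img_next):
        z_t = self.encoder(img_t)
        z_next = self.encoder(img_next)
        z_pred = self.transition(torch.cat([z_t, u_t], dim=-1))
        recon = nn.functional.mse_loss(self.decoder(z_t), img_t)
        pred = nn.functional.mse_loss(z_pred, z_next)
        # Joint objective: the embedding must both reconstruct images and
        # support prediction in latent space.
        return recon + pred
```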
Probabilistic Inference of Twitter Users' Age based on What They Follow
Twitter provides an open and rich source of data for studying human behaviour
at scale and is widely used in social and network sciences. However, a major
criticism of Twitter data is that demographic information is largely absent.
Enhancing Twitter data with user ages would advance our ability to study social
network structures, information flows and the spread of contagions. Approaches
toward age detection of Twitter users typically focus on specific properties of
tweets, e.g., linguistic features, which are language dependent. In this paper,
we devise a language-independent methodology for determining the age of Twitter
users from data that is native to the Twitter ecosystem. The key idea is to use
a Bayesian framework to generalise ground-truth age information from a few
Twitter users to the entire network based on what/whom they follow. Our
approach scales to inferring the age of 700 million Twitter accounts with high
accuracy.
Comment: 9 pages, 9 figures
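As a toy illustration of generalising ground-truth ages through follow relationships, the naive-Bayes sketch below scores age groups for an unlabeled user from whom they follow, with likelihoods estimated on a labeled seed set. The paper's Bayesian model and the machinery needed to scale to 700 million accounts are more involved; the smoothing constant and age grouping here are assumptions.

```python
# Naive-Bayes age inference from followed accounts, fit on a labeled seed set.
import numpy as np
from collections import defaultdict

def fit_follow_likelihoods(seed_users, smoothing=1.0):
    """seed_users: iterable of (age_group, set_of_followed_account_ids)."""
    counts = defaultdict(lambda: defaultdict(float))  # counts[age][account]
    totals = defaultdict(float)                       # users per age group
    for age, followed in seed_users:
        totals[age] += 1.0
        for acc in followed:
            counts[age][acc] += 1.0
    return counts, totals, smoothing

def infer_age(model, followed):
    counts, totals, s = model
    n = sum(totals.values())
    scores = {}
    for age in totals:
        logp = np.log(totals[age] / n)  # prior from seed-set proportions
        for acc in followed:
            # Smoothed probability that a user of this age follows acc.
            logp += np.log((counts[age][acc] + s) / (totals[age] + 2 * s))
        scores[age] = logp
    return max(scores, key=scores.get)
```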
Neural Embeddings of Graphs in Hyperbolic Space
Neural embeddings have been used with great success in Natural Language
Processing (NLP). They provide compact representations that encapsulate word
similarity and attain state-of-the-art performance in a range of linguistic
tasks. The success of neural embeddings has prompted significant amounts of
research into applications in domains other than language. One such domain is
graph-structured data, where embeddings of vertices can be learned that
encapsulate vertex similarity and improve performance on tasks including edge
prediction and vertex labelling. For both NLP and graph based tasks, embeddings
have been learned in high-dimensional Euclidean spaces. However, recent work
has shown that the appropriate isometric space for embedding complex networks
is not flat Euclidean space but negatively curved hyperbolic space. We
present a new concept that exploits these recent insights and propose learning
neural embeddings of graphs in hyperbolic space. We provide experimental
evidence that embedding graphs in their natural geometry significantly improves
performance on downstream tasks for several real-world public datasets.
Comment: 7 pages, 5 figures
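For concreteness, the geometry such embeddings optimise in place of Euclidean distance can be written down directly. Below is the standard distance in the Poincaré-ball model of hyperbolic space, a common choice for this kind of embedding; the paper's exact model and objective may differ.

```python
# Distance between two points in the Poincare-ball model of hyperbolic space.
import numpy as np

def poincare_distance(u, v, eps=1e-9):
    """Hyperbolic distance between points u, v with ||u||, ||v|| < 1."""
    uu, vv = np.dot(u, u), np.dot(v, v)
    duv = np.dot(u - v, u - v)
    # Distances blow up near the boundary of the ball, which is what lets
    # hyperbolic space embed tree-like networks with low distortion.
    x = 1.0 + 2.0 * duv / max((1.0 - uu) * (1.0 - vv), eps)
    return np.arccosh(x)
```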